Submap Matching for Stereo-Vision Based Indoor/Outdoor SLAM
Autonomous robots operating in semi- or unstructured environments, e.g. during search and rescue missions, require methods for online on-board creation of maps to support path planning and obstacle avoidance. Perception based on stereo cameras is well suited for mixed indoor/outdoor environments. The creation of full 3D maps in GPS-denied areas, however, is still a challenging task for current robot systems, in particular due to depth errors resulting from stereo reconstruction. State-of-the-art 6D SLAM approaches employ graph-based optimization on the relative transformations between keyframes or local submaps. To achieve loop closures, correct data association is crucial, in particular for sensor input received at different points in time. To address this challenge, we propose a novel method for submap matching. It is based on robust keypoints, which we derive from local obstacle classification. By describing geometric 3D features, we achieve invariance to changing viewpoints and varying lighting conditions. We performed experiments in indoor, outdoor and mixed environments. In all three scenarios we achieved a final 3D position error of less than 0.23% of the full trajectory. In addition, we compared our approach with a 3D RBPF SLAM from previous work, achieving an improvement of at least 27% in mean 2D localization accuracy across different scenarios.
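The data-association step between submaps can be illustrated with a generic sketch. This is not the paper's actual descriptor or matching pipeline; it is a minimal, assumed version of the two standard ingredients: nearest-neighbor matching of 3D feature descriptors with a ratio test, followed by least-squares rigid alignment (Kabsch) of the matched keypoints. All function names and parameters are illustrative.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a Lowe-style ratio test.

    Returns index pairs (i, j) where descriptor i of submap A matches
    descriptor j of submap B distinctly better than its second-best match.
    """
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(len(desc_a)):
        order = np.argsort(d[i])
        if len(order) > 1 and d[i, order[0]] < ratio * d[i, order[1]]:
            matches.append((i, order[0]))
    return matches

def rigid_transform(src, dst):
    """Least-squares rigid alignment (Kabsch): find R, t with dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

In a full pipeline, the resulting relative transformation between two submaps would become a loop-closure edge in the pose graph.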
The LRU Rover for Autonomous Planetary Exploration and its Success in the SpaceBotCamp Challenge
The task of planetary exploration poses many challenges for a robot system, from weight and size constraints to sensors and actuators suitable for extraterrestrial environment conditions. As there is a significant communication delay to other planets, the efficient operation of a robot system requires a high level of autonomy. In this work, we present the Light Weight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and the application of stereo cameras as its main sensor ensures the applicability to space missions. We implemented software components for self-localization in GPS-denied environments, environment mapping, object search and localization, and the autonomous pickup and assembly of objects with its arm. Additional high-level mission control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of our LRU at the SpaceBotCamp challenge, a national robotics contest with a focus on autonomous planetary exploration. A robot had to autonomously explore a moon-like rough-terrain environment, locate and collect two objects and assemble them after transport to a third object - which the LRU did on its first try, in half of the time and fully autonomously.
Control with a Compliant Force-Torque Sensor
There are assembly tasks that require a compliant device at the end-effector, since possible disturbances are beyond the bandwidth of robot control. This paper discusses a compliant force-torque sensor for assembly. Two aspects are explained in detail: force control considering a significant force-dependent displacement, and control of an end-effector with an elastic mounting during fast unconstrained motion. The latter uses an adaptive scheme which serves as a further level in a hierarchical position-based control. Experimental results are given which show the limits of industrial robots.
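The first aspect, force control with a significant force-dependent displacement, can be sketched in one dimension. This is a minimal illustration with assumed stiffness values, not the controller from the paper: the force error is converted into a position increment through the (assumed known) combined compliance of sensor and environment.

```python
def force_control_step(x_cmd, f_meas, f_des, stiffness, gain=0.5):
    """Position-based force control step: turn the force error into a
    position increment via the combined sensor/environment stiffness."""
    return x_cmd + gain * (f_des - f_meas) / stiffness

# hypothetical 1-D contact simulation: measured force grows with penetration
k_env = 2000.0   # N/m, assumed combined sensor + environment stiffness
x_wall = 0.10    # m, assumed contact surface location
x = x_wall       # start just touching the surface
for _ in range(50):
    f_meas = max(0.0, k_env * (x - x_wall))          # simulated sensor reading
    x = force_control_step(x, f_meas, f_des=10.0, stiffness=k_env)
# the commanded position settles where the contact force equals f_des
```

With a gain below 1, each step removes a fixed fraction of the force error, so the commanded position converges without overshoot despite the compliant mounting.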
Sample Consensus Fitting of Bivariate Polynomials for Initializing EM-based Modeling of Smooth 3D Surfaces
This paper presents a method for finding the largest connected smooth surface in noisy depth images. Formulating the fitting in a sample-consensus way allows the use of RANSAC (or any similar estimator) and makes the method tolerant to a low percentage of inliers in the input. It can therefore simultaneously segment and model the surface of interest, which is important in applications such as analyzing the physical properties of carbon-fiber-reinforced polymer (CFRP) structures. Using bivariate polynomials for modeling turns out to be advantageous, as it captures the variations along the two directions on the surface; fitting them efficiently with RANSAC, however, is not straightforward. We present the necessary pre- and post-processing, distance and normal-direction checks, and degree optimization (lowering the order of the polynomial), and evaluate how these improve the results. Finally, to improve the initial estimate provided by RANSAC, an Expectation Maximization approach is employed, converging to the best solution. The method was tested on high-quality data as well as on real-world scenes captured by an RGB-D camera. We will publish the method as part of the Point Cloud Library.
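The sample-consensus fitting step can be sketched as follows. This is a minimal illustration, not the PCL implementation; the distance threshold, degree and iteration counts are assumptions. Each iteration fits a bivariate polynomial to a minimal sample of points, scores it by counting residuals below a threshold, and the final model is refit on the largest consensus set.

```python
import numpy as np

def _design_matrix(x, y, deg):
    """Columns are all monomials x^i * y^j with i + j <= deg."""
    cols = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    return np.stack(cols, axis=1)

def fit_bivariate_poly(x, y, z, deg):
    """Least-squares fit of z = p(x, y)."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(x, y, deg), z, rcond=None)
    return coeffs

def ransac_surface(points, deg=2, n_iter=200, thresh=0.01, rng=None):
    """Sample-consensus surface fit: keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    x, y, z = points.T
    n_min = (deg + 1) * (deg + 2) // 2       # number of polynomial coefficients
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(points), n_min, replace=False)
        c = fit_bivariate_poly(x[idx], y[idx], z[idx], deg)
        resid = np.abs(_design_matrix(x, y, deg) @ c - z)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the full consensus set for the final model
    c = fit_bivariate_poly(x[best_inliers], y[best_inliers], z[best_inliers], deg)
    return c, best_inliers
```

The paper's additional steps (normal-direction checks, degree lowering, EM refinement) would operate on top of this initial consensus estimate.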
Efficient Camera-Based Pose Estimation for Real-Time Applications
Accurate online localization is crucial for mobile robotics. In this paper, we describe a real-time image-based localization technique based on a single calibrated camera, which can be supported by a second camera to improve accuracy and to provide the correct translational scale. Our goal is robust and unbiased pose estimation in highly dynamic scenes on resource-limited systems. The presented approach is characterized by significantly improved robustness of the pose estimation, a novel approach for subpixel-accurate stereo landmark initialization, and a speed-up of conventional tracking routines to achieve online capability. Although the algorithm is designed for accurate online short-range egomotion estimation in hand-held scanning devices, it can be used in any mobile robot application, as shown in this paper. Various tests and experimental results with a mobile platform and a hand-held 3D modeler are presented and discussed.
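Stereo landmark initialization rests on back-projecting matched pixel pairs into 3D. A minimal sketch for a rectified, calibrated rig follows (hypothetical parameter names; not the authors' subpixel-accurate method, which the abstract does not detail):

```python
def triangulate_rectified(u_l, v, u_r, f, b, cx, cy):
    """Back-project a matched pixel pair from a rectified stereo rig.

    disparity d = u_l - u_r, depth Z = f * b / d, and X, Y follow from
    the pinhole model of the left camera (focal length f in pixels,
    baseline b in meters, principal point (cx, cy)).
    """
    d = u_l - u_r
    Z = f * b / d
    X = (u_l - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```

Since depth error grows quadratically with distance (Z = f*b/d, so dZ/dd ∝ Z²), subpixel disparity accuracy directly improves the quality of initialized landmarks at range.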